21 research outputs found

    Dynamics of neural cryptography

    Synchronization of neural networks has been used for novel public channel protocols in cryptography. In the case of tree parity machines the dynamics of both bidirectional synchronization and unidirectional learning is driven by attractive and repulsive stochastic forces. Thus it can be described well by a random walk model for the overlap between the participating neural networks. For that purpose transition probabilities and scaling laws for the step sizes are derived analytically. Both these calculations and numerical simulations show that bidirectional interaction leads to full synchronization on average. In contrast, successful learning is only possible by means of fluctuations. Consequently, synchronization is much faster than learning, which is essential for the security of the neural key-exchange protocol. However, this qualitative difference between bidirectional and unidirectional interaction vanishes if tree parity machines with more than three hidden units are used, so such neural networks are not suitable for neural cryptography. In addition, the effective number of keys which can be generated by the neural key-exchange protocol is calculated using the entropy of the weight distribution. As this quantity increases exponentially with the system size, brute-force attacks on neural cryptography can easily be made unfeasible.
    Comment: 9 pages, 15 figures; typos corrected
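    A minimal simulation sketch of the bidirectional case may help make the setting concrete. It assumes the standard tree parity machine conventions (K hidden units, N inputs per unit, integer weights bounded by the synaptic depth L, Hebbian updates applied only when the total outputs agree); the parameter values and helper names below are illustrative, not taken from the paper.

```python
import numpy as np

K, N, L = 3, 100, 3  # hidden units, inputs per unit, synaptic depth (illustrative)
rng = np.random.default_rng(0)

def tpm_output(w, x):
    """Tree parity machine: sigma_k = sign(w_k . x_k), tau = prod_k sigma_k."""
    sigma = np.sign(np.einsum('kn,kn->k', w, x))
    sigma[sigma == 0] = -1                       # break ties deterministically
    return sigma, int(np.prod(sigma))

def hebbian_update(w, x, sigma, tau):
    """Update only hidden units that agree with the total output; keep |w| <= L."""
    for k in range(K):
        if sigma[k] == tau:
            w[k] += x[k] * tau
    np.clip(w, -L, L, out=w)

wA = rng.integers(-L, L + 1, size=(K, N))        # partner A's weights
wB = rng.integers(-L, L + 1, size=(K, N))        # partner B's weights

steps = 0
while not np.array_equal(wA, wB):
    x = rng.choice([-1, 1], size=(K, N))         # common public input
    sA, tauA = tpm_output(wA, x)
    sB, tauB = tpm_output(wB, x)
    if tauA == tauB:                             # update only when outputs agree
        hebbian_update(wA, x, sA, tauA)
        hebbian_update(wB, x, sB, tauB)
    steps += 1

print(f"full synchronization after {steps} steps")
```

    A unidirectional learner can run the same update on the observed inputs and outputs, but it cannot skip the steps where the partners disagree; this asymmetry is what the random walk model quantifies.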

    Synchronization of random walks with reflecting boundaries

    Reflecting boundary conditions cause two one-dimensional random walks to synchronize if a common direction is chosen in each step. The mean synchronization time and its standard deviation are calculated analytically. Both quantities are found to increase proportionally to the square of the system size. Additionally, the probability of synchronization in a given step is analyzed; it converges to a geometric distribution for long synchronization times. From this asymptotic behavior the number of steps required to synchronize an ensemble of independent random walk pairs is deduced. Here the synchronization time increases with the logarithm of the ensemble size. The results of this model are compared to those observed in neural synchronization.
    Comment: 10 pages, 7 figures; introduction changed, typos corrected
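    The model is simple enough to check numerically. Below is a small sketch (function names and parameters are illustrative): two walkers on {0, ..., m-1} receive a common random direction in each step, and a walker that would leave the interval stays at the boundary, which is how the distance between the walkers shrinks.

```python
import numpy as np

def sync_time(m, rng):
    """Steps until two random walkers on {0, ..., m-1} coincide.

    Both walkers move in the same randomly chosen direction; a walker
    that would step outside the interval stays at the boundary instead.
    """
    a, b = rng.integers(0, m, size=2)
    t = 0
    while a != b:
        d = rng.choice((-1, 1))                  # common direction for both walkers
        a = min(max(a + d, 0), m - 1)
        b = min(max(b + d, 0), m - 1)
        t += 1
    return t

rng = np.random.default_rng(1)
for m in (8, 16, 32):
    times = [sync_time(m, rng) for _ in range(2000)]
    print(m, np.mean(times))                     # mean grows roughly like m**2
```

    Doubling m should roughly quadruple the sample mean, in line with the quadratic scaling derived in the paper.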

    Successful attack on permutation-parity-machine-based neural cryptography

    An algorithm is presented which implements a probabilistic attack on the key-exchange protocol based on permutation parity machines. Instead of imitating the synchronization of the communicating partners, the strategy consists of a Monte Carlo method to sample the space of possible weights during inner rounds and an analytic approach to convey the extracted information from one outer round to the next. The results show that the protocol under attack fails to synchronize faster than an eavesdropper using this algorithm.
    Comment: 4 pages, 2 figures; abstract changed, note about chaos cryptography added, typos corrected
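    The published attack itself is more involved, but the Monte Carlo idea can be illustrated on a toy parity unit with binary weights: propose single-bit changes to a candidate weight vector and keep those that agree with at least as many observed input/output pairs as before. Everything below (the parity model, sizes, names) is a simplified illustration, not the algorithm from the paper.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 16                                           # illustrative weight-vector length

# Hypothetical observations: public binary inputs and the parity outputs
# an eavesdropper could record during the exchange.
true_w = rng.integers(0, 2, size=n)
xs = rng.integers(0, 2, size=(30, n))
ys = (xs @ true_w) % 2

# Monte Carlo sampling of the weight space: single-bit-flip proposals,
# accepted whenever consistency with the observations does not decrease.
w = rng.integers(0, 2, size=n)
for _ in range(20000):
    cand = w.copy()
    cand[rng.integers(n)] ^= 1
    if np.sum((xs @ cand) % 2 == ys) >= np.sum((xs @ w) % 2 == ys):
        w = cand

print("outputs matched:", np.sum((xs @ w) % 2 == ys), "of", len(ys))
```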

    Genetic attack on neural cryptography

    Different scaling properties for the complexity of bidirectional synchronization and unidirectional learning are essential for the security of neural cryptography. Incrementing the synaptic depth of the networks increases the synchronization time only polynomially, while the success probability of the geometric attack is reduced exponentially, so that the attack clearly fails in the limit of infinite synaptic depth. This method is improved by adding a genetic algorithm which selects the fittest neural networks. The probability of a successful genetic attack is calculated for different model parameters using numerical simulations. The results show that the scaling laws observed in the case of other attacks hold for the improved algorithm, too: the number of networks needed for an effective attack grows exponentially with increasing synaptic depth. In addition, finite-size effects caused by Hebbian and anti-Hebbian learning are analyzed. These learning rules converge to the random walk rule if the synaptic depth is small compared to the square root of the system size.
    Comment: 8 pages, 12 figures; section 5 amended, typos corrected
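    For reference, the three learning rules compared in the finite-size analysis differ only in the sign and weighting of the update. A sketch for a single hidden unit, assuming the standard conventions (the update is applied only when the unit's output matches the total output tau, and weights are confined to [-L, L]); the function name is illustrative:

```python
import numpy as np

def update(w, x, tau, L, rule):
    """Weight update for one hidden unit of a tree parity machine."""
    if rule == "hebbian":                        # move towards the partner
        w = w + x * tau
    elif rule == "anti-hebbian":                 # move away from the partner
        w = w - x * tau
    elif rule == "random-walk":                  # direction set by the input alone
        w = w + x
    return np.clip(w, -L, L)

w = np.array([2, -3, 0, 1])
x = np.array([1, -1, 1, 1])
print(update(w, x, tau=1, L=3, rule="hebbian"))  # -> [ 3 -3  1  2]
```

    The last rule is the limit the abstract refers to: for synaptic depth small compared to the square root of the system size, Hebbian and anti-Hebbian learning behave like the random walk rule.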

    Efficient statistical inference for stochastic reaction processes

    We address the problem of estimating unknown model parameters and state variables in stochastic reaction processes when only sparse and noisy measurements are available. Using an asymptotic system size expansion for the backward equation we derive an efficient approximation for this problem. We demonstrate the validity of our approach on model systems and generalize our method to the case when some state variables are not observed.
    Comment: 4 pages, 2 figures, 2 tables; typos corrected, remark about Kalman smoother added
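    For orientation, system size expansions of this kind build on the standard van Kampen ansatz; the notation below (system size Omega, stoichiometric coefficients S_ij, rate functions f_j) is assumed for illustration, not taken from the paper.

```latex
% van Kampen ansatz: copy numbers n_i split into a macroscopic part
% and fluctuations of order sqrt(Omega), where Omega is the system size
n_i = \Omega\,\phi_i(t) + \sqrt{\Omega}\,\xi_i ,
\qquad
\frac{d\phi_i}{dt} = \sum_j S_{ij}\, f_j(\phi) .
% Expanding the (backward) equation in powers of Omega^{-1/2} yields
% Gaussian fluctuations xi_i around the macroscopic trajectory phi(t),
% which makes inference from sparse, noisy measurements tractable.
```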

    Neuronale Synchronisation und Kryptographie (Neural Synchronization and Cryptography)

    Neural networks can synchronize by learning from each other. For that purpose they receive common inputs and exchange their outputs. Adjusting discrete weights according to a suitable learning rule then leads to full synchronization in a finite number of steps. It is also possible to train additional neural networks by using the inputs and outputs generated during this process as examples. Several algorithms for both tasks are presented and analyzed. In the case of Tree Parity Machines the dynamics of both processes is driven by attractive and repulsive stochastic forces. Thus it can be described well by models based on random walks, which represent either the weights themselves or order parameters of their distribution. However, synchronization is much faster than learning. This effect is caused by different frequencies of attractive and repulsive steps, as only neural networks interacting with each other are able to skip unsuitable inputs.

    Scaling laws for the number of steps needed for full synchronization and successful learning are derived using analytical models. They indicate that the difference between both processes can be controlled by changing the synaptic depth. In the case of bidirectional interaction the synchronization time increases proportionally to the square of this parameter, but it grows exponentially if information is transmitted in one direction only. Because of this effect neural synchronization can be used to construct a cryptographic key-exchange protocol. Here the partners benefit from mutual interaction, so that a passive attacker is usually unable to learn the generated key in time.

    The success probabilities of different attack methods are determined by numerical simulations, and scaling laws are derived from the data. If the synaptic depth is increased, the complexity of a successful attack grows exponentially, while there is only a polynomial increase of the effort needed to generate a key. Therefore the partners can reach any desired level of security by choosing suitable parameters. In addition, the entropy of the weight distribution is used to determine the effective number of keys that are generated in different runs of the key-exchange protocol using the same sequence of input vectors. If the common random inputs are replaced with queries, synchronization is possible, too. However, the partners then have more control over the difficulty of the key exchange and of the attacks, so they can improve the security without increasing the average synchronization time.
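    The scaling statements in this abstract can be summarized compactly; the decay constant y and the entropy symbol S are illustrative notation, not taken from the thesis.

```latex
% effort of the partners vs. an attacker, as a function of synaptic depth L
t_{\mathrm{sync}} \propto L^{2} ,
\qquad
P_{\mathrm{attack}} \sim e^{-y L} \quad (y > 0) .
% effective number of keys from the entropy S (in bits) of the
% weight distribution over different runs of the protocol
n_{\mathrm{keys}} \approx 2^{S}
```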